Gambling

The Good Old Days of Sports Gambling

The New Yorker

Recent memoirs by the retired bookie Art Manteris and the storied gambler Billy Walters provide a glimpse of an industry in its fledgling form--and a preview of the DraftKings era to come. Las Vegas is no longer the seat of the sportsbook gods. In most states, it's now legal, and extremely popular, to place bets using apps or websites such as FanDuel and DraftKings. From your couch, you can wager on everything from the results of snooker championships to the color of the Gatorade poured over the victorious coach after the Super Bowl. The N.F.L., along with the other major-league American sports associations, has officially partnered with sports-betting sites, and their alliance has proved so lucrative that other industries want in on the action; last month, the Golden Globes made a deal with Polymarket, a predictions-market platform, to encourage wagering (or "trading," if you prefer) on the outcomes of its awards race.


Is AI taking the fun out of fantasy football?

BBC News

Is AI taking the fun out of fantasy football? For years, fantasy football has given every armchair manager the space to back up claims they could do a better job than the real thing. Whether you're competing against workmates, family members or strangers, the ability to pull together your own dream team is irresistible to millions of football fans. The competitive pastime has spawned a whole industry of content creators offering weekly tips for anyone looking to gain an edge as they sift through stats and manage transfers. Recently, more players have been turning to Artificial Intelligence (AI) tools for advice - but not everyone agrees they have a place in the virtual dugout.


Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork

Neural Information Processing Systems

Motivated by the existence of competitive subnetworks within a dense network, as posited by the Lottery Ticket Hypothesis, we introduce a novel neuron-wise task incremental learning method, Data-free Subnetworks (DSN), which enhances elastic knowledge transfer across sequentially arriving tasks. Specifically, DSN transfers knowledge from previously learned tasks to each newly arriving task by selecting, via neuron-wise masks, the affiliated weights of a small set of neurons to activate, including neurons reused from prior tasks. It also transfers potentially valuable knowledge back to earlier tasks via data-free replay.
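The neuron-wise masking idea can be illustrated with a toy sketch. The layer sizes, importance scores, and top-k selection below are illustrative assumptions, not the DSN procedure itself: a per-neuron mask activates a small set of output neurons, and only their affiliated weights survive.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # dense layer: 8 inputs, 16 output neurons

# Hypothetical per-neuron importance scores; in DSN these would be
# derived from previously learned tasks rather than drawn at random.
scores = rng.random(16)
active = scores >= np.sort(scores)[-4]   # activate the top-4 neurons

# Keep only the affiliated weights (the columns) of the active neurons;
# everything else is masked out for this task.
W_task = W * active[None, :]
```

Reused neurons from earlier tasks would simply be members of `active` whose columns were already allocated, so the new task shares their weights instead of claiming fresh capacity.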


Distributionally Robust Ensemble of Lottery Tickets Towards Calibrated Sparse Network Training

Neural Information Processing Systems

Recently developed sparse network training methods, such as the Lottery Ticket Hypothesis (LTH) and its variants, have shown impressive learning capacity by finding sparse subnetworks within a dense one. While these methods can heavily sparsify deep networks, they generally focus on matching the accuracy of dense counterparts and neglect network calibration. Yet calibrated predictions lie at the core of model reliability, especially for addressing overconfidence and out-of-distribution cases. In this study, we propose a novel Distributionally Robust Optimization (DRO) framework that learns an ensemble of lottery tickets for calibrated network sparsification. Specifically, the proposed DRO ensemble learns multiple diverse and complementary sparse subnetworks (tickets) under the guidance of uncertainty sets, which encourage the tickets to gradually capture different data distributions, from easy to hard, and naturally complement one another. We theoretically justify the strong calibration performance by showing that the proposed robust training process provably lowers the confidence of incorrect predictions. Extensive experimental results on several benchmarks show that our lottery ticket ensemble yields a clear calibration improvement without sacrificing accuracy or increasing inference cost. Furthermore, experiments on OOD datasets demonstrate the robustness of our approach in the open-set environment.


One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers

Neural Information Processing Systems

The success of lottery ticket initializations (Frankle and Carbin, 2019) suggests that small, sparsified networks can be trained so long as the network is initialized appropriately. Unfortunately, finding these "winning ticket" initializations is computationally expensive. One potential solution is to reuse the same winning tickets across a variety of datasets and optimizers. However, the generality of winning ticket initializations remains unclear. Here, we attempt to answer this question by generating winning tickets for one training configuration (optimizer and dataset) and evaluating their performance on another configuration. Perhaps surprisingly, we found that, within the natural images domain, winning ticket initializations generalized across a variety of datasets, including Fashion MNIST, SVHN, CIFAR-10/100, ImageNet, and Places365, often achieving performance close to that of winning tickets generated on the same dataset. Moreover, winning tickets generated using larger datasets consistently transferred better than those generated using smaller datasets. We also found that winning ticket initializations generalize across optimizers with high performance. These results suggest that winning ticket initializations generated by sufficiently large datasets contain inductive biases generic to neural networks more broadly which improve training across many settings and provide hope for the development of better initialization methods.
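The transfer experiment can be sketched on a toy linear model. The magnitude-pruning rule, synthetic data, and hyperparameters below are illustrative assumptions, not the paper's setup: a mask found on one "source" configuration is reused, together with the shared initialization, on a different "target" dataset.

```python
import numpy as np

def magnitude_mask(w, sparsity):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    mask = np.ones_like(w)
    mask[np.argsort(np.abs(w))[:int(len(w) * sparsity)]] = 0.0
    return mask

def train(w, mask, X, y, lr=0.1, steps=300):
    """Gradient descent on mean-squared error, keeping pruned weights at zero."""
    w = w * mask
    for _ in range(steps):
        w = (w - lr * X.T @ (X @ w - y) / len(y)) * mask
    return w

rng = np.random.default_rng(0)
d = 20
w_init = rng.normal(size=d)

# "Source" configuration: train dense on dataset A, then extract a ticket.
XA = rng.normal(size=(200, d)); yA = XA @ rng.normal(size=d)
mask = magnitude_mask(train(w_init, np.ones(d), XA, yA), sparsity=0.8)

# "Target" configuration: reuse the same mask and initialization on dataset B.
XB = rng.normal(size=(200, d)); yB = XB @ rng.normal(size=d)
w_ticket = train(w_init, mask, XB, yB)
```

The question the paper studies is whether `w_ticket` trained this way approaches the accuracy of a ticket discovered directly on dataset B.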


Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory

Neural Information Processing Systems

Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually crafted heuristics to generate pruned sparse networks. Such heuristic pruning strategies cannot guarantee that the pruned networks achieve test accuracy comparable to the original dense ones. Recent works have empirically identified and verified the Lottery Ticket Hypothesis (LTH): a randomly initialized dense neural network contains an extremely sparse subnetwork that can be trained to achieve accuracy similar to that of the dense network. Lacking theoretical grounding, these works typically need to run multiple rounds of expensive training and pruning over the original large network to discover a sparse subnetwork with low accuracy loss.


Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective

Neural Information Processing Systems

The lottery ticket hypothesis (LTH) has attracted attention because it may explain why over-parameterized models often show high generalization ability. It is known that iterative magnitude pruning (IMP), an algorithm for finding sparse networks that can be trained independently from their initial weights (so-called winning tickets), does not work well with a large initial learning rate in deep neural networks such as ResNet. However, since a large initial learning rate generally helps the optimizer converge to flatter minima, we hypothesize that winning tickets have relatively sharp minima, which is considered a disadvantage for generalization. In this paper, we confirm this hypothesis and show that PAC-Bayesian theory can provide an explicit understanding of the relationship between the LTH and generalization behavior. Based on our experimental findings that IMP with a small learning rate finds relatively sharp minima and that the distance from the initial weights is deeply involved in winning tickets, we derive a PAC-Bayes bound using a spike-and-slab distribution to analyze winning tickets. Finally, we revisit existing algorithms for finding winning tickets from a PAC-Bayesian perspective and provide new insights into these methods.
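IMP with weight rewinding, the procedure named above, can be sketched on a toy linear model. The numpy setup, synthetic data, and hyperparameters are illustrative assumptions, not the paper's experimental configuration: each round trains the current subnetwork, prunes the smallest-magnitude surviving weights, and rewinds the survivors to their initial values.

```python
import numpy as np

def train(w, mask, X, y, lr=0.1, steps=200):
    """Gradient descent on mean-squared error, keeping pruned weights at zero."""
    w = w * mask
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = (w - lr * grad) * mask
    return w

def imp(X, y, rounds=3, prune_frac=0.5, seed=0):
    """Iterative magnitude pruning: train, prune the smallest surviving
    weights, then rewind the survivors to their initial values."""
    rng = np.random.default_rng(seed)
    w_init = rng.normal(size=X.shape[1])
    mask = np.ones_like(w_init)
    for _ in range(rounds):
        w = train(w_init, mask, X, y)
        alive = np.flatnonzero(mask)
        k = int(len(alive) * prune_frac)
        mask[alive[np.argsort(np.abs(w[alive]))[:k]]] = 0.0
    # The winning ticket: surviving weights rewound to initialization.
    return w_init * mask, mask

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 16))
y = X @ rng.normal(size=16)
ticket, mask = imp(X, y)
```

Each round here halves the surviving weights, which is why the repeated train-prune cycles make ticket discovery expensive; the learning-rate sensitivity the abstract discusses concerns the `lr` used during these training rounds.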